
    On the Sample Information About Parameter and Prediction

    The Bayesian measure of sample information about the parameter, known as Lindley's measure, is widely used in various problems such as developing prior distributions, models for the likelihood functions, and optimal designs. The predictive information is defined similarly and used for model selection and optimal designs, though to a lesser extent. The parameter and predictive information measures are proper utility functions and have also been used in combination. Yet the relationship between the two measures and the effects of conditional dependence between the observable quantities on the Bayesian information measures remain unexplored. We address both issues. The relationship between the two information measures is explored through the information provided by the sample about the parameter and prediction jointly. The role of dependence is explored along with the interplay between the information measures, prior and sampling design. For the conditionally independent sequence of observable quantities, decompositions of the joint information characterize Lindley's measure as the sample information about the parameter and prediction jointly and the predictive information as part of it. For the conditionally dependent case, the joint information about parameter and prediction exceeds Lindley's measure by an amount due to the dependence. More specific results are shown for the normal linear models and a broad subfamily of the exponential family. Conditionally independent samples provide relatively little information for prediction, and the gap between the parameter and predictive information measures grows rapidly with the sample size.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org/), DOI: http://dx.doi.org/10.1214/10-STS329
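The normal-model claim in the last sentence can be illustrated in closed form. The sketch below assumes the simplest conjugate setup, X_i = theta + eps_i with eps_i ~ N(0, sigma2) and theta ~ N(0, tau2); these specifics are not given in the abstract, only the qualitative conclusion is. Lindley's measure is the mutual information I(theta; X_1..n), and the predictive information is I(X_{n+1}; X_1..n); both follow from standard Gaussian entropy formulas.

```python
import math

def lindley_info(n, sigma2=1.0, tau2=1.0):
    """Lindley's measure I(theta; X_1..n) = 0.5 * ln(1 + n * tau2 / sigma2)."""
    return 0.5 * math.log(1.0 + n * tau2 / sigma2)

def predictive_info(n, sigma2=1.0, tau2=1.0):
    """Predictive information I(X_{n+1}; X_1..n)
    = 0.5 * ln( Var(X_{n+1}) / Var(X_{n+1} | X_1..n) )."""
    post_var = tau2 * sigma2 / (sigma2 + n * tau2)  # posterior variance of theta
    return 0.5 * math.log((tau2 + sigma2) / (sigma2 + post_var))

# The gap widens with n: the sample keeps informing the parameter
# (growing like 0.5 * ln n), while the predictive information stays
# bounded above by 0.5 * ln(1 + tau2 / sigma2).
gaps = {n: lindley_info(n) - predictive_info(n) for n in (1, 10, 100, 1000)}
```

With sigma2 = tau2 = 1, the predictive information never exceeds 0.5 * ln 2 no matter how large n is, while Lindley's measure is unbounded, matching the abstract's observation that conditionally independent samples provide relatively little information for prediction.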

    The ability to predict the choices of prospective passengers allows airlines to alleviate the need for overbooking flights and subsequently bumping passengers, potentially leading to improved customer satisfaction. Past studies have typically focused on identifying the important factors that influence choice behavior and applied discrete choice models to model passengers' airline choices. Typical discrete choice models rely on two major assumptions: the existence of a utility function that represents the preferences over a choice set, and the linearity of the utility function with respect to attributes of alternatives and decision makers. These assumptions allow discrete choice models to be easily interpreted, as each unit change in an input attribute can be directly translated into a change in utility that eventually affects the optimal choice. However, these restrictive assumptions might impede the ability of typical discrete choice models to deliver operationally accurate predictions and forecasts. In this paper, we focus on developing operational models that are intended to support the actual prediction decisions of airlines. We propose two alternative approaches: pairwise preference learning using classification techniques, and ranking function learning using evolutionary computation. We have empirically compared these approaches against standard discrete choice models and report some promising results in this paper.
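The first proposed approach, pairwise preference learning via classification, can be sketched as follows. The abstract does not give the paper's algorithmic details, so this is a minimal illustration of the general technique under assumed specifics: each observed choice is converted into feature-difference pairs (chosen minus rejected, and the mirrored negative), and a linear classifier trained on those differences yields a scoring function for ranking alternatives. The attribute setup and the logistic-regression learner are illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_examples(choice_sets):
    """Turn each choice set into (x_chosen - x_rejected, +1) rows,
    plus mirrored (-d, -1) rows to keep the two classes balanced."""
    X, y = [], []
    for alts, chosen in choice_sets:
        for j in range(len(alts)):
            if j == chosen:
                continue
            d = alts[chosen] - alts[j]
            X.append(d);  y.append(1.0)
            X.append(-d); y.append(-1.0)
    return np.array(X), np.array(y)

# Synthetic choice data: the "true" utility prefers low fare and low
# duration (hypothetical attributes, two columns per alternative).
w_true = np.array([-1.0, -0.5])
choice_sets = []
for _ in range(200):
    alts = rng.uniform(0.0, 1.0, size=(3, 2))   # 3 alternatives, 2 attributes
    choice_sets.append((alts, int(np.argmax(alts @ w_true))))

X, y = pairwise_examples(choice_sets)

# Logistic regression on the pairwise differences via gradient descent.
w = np.zeros(2)
for _ in range(500):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    w -= 1.0 * grad

def predict(alts, w):
    """Choose the alternative with the highest learned score."""
    return int(np.argmax(alts @ w))

accuracy = np.mean([predict(alts, w) == c for alts, c in choice_sets])
```

Because the classifier operates on attribute differences rather than raw utilities, it sidesteps the linear-utility interpretation requirement that the abstract flags as restrictive: any classifier (not just a linear one) can be dropped into the pairwise comparison step.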